- Path: newshost.cyberramp.net!news
- From: sinan@cyberramp.net (John Noland)
- Newsgroups: comp.os.msdos.programmer,comp.lang.c
- Subject: Re: open vs fopen?
- Date: 17 Feb 1996 01:01:24 GMT
- Organization: Prose Software
- Message-ID: <4g39d4$ej8@newshost.cyberramp.net>
- References: <uEYFxc9nX8WX083yn@mbnet.mb.ca> <4f8bev$6tr@hermes.louisville.edu> <2d3avbl60.alamito@marketgraph.xs4all.nl> <4ftusv$181@newshost.cyberramp.net> <danpop.824430285@rscernix>
- NNTP-Posting-Host: ramp3-28.cyberramp.net
- X-Newsreader: WinVN 0.99.5
-
- In article <danpop.824430285@rscernix>, danpop@mail.cern.ch says...
- >
- >In <4ftusv$181@newshost.cyberramp.net> sinan@cyberramp.net (John Noland) writes:
- >
- >>The open() function and its ilk are normally referred to as the "low-level"
- >>I/O package. fopen() is the "Buffered" or "Standard" I/O package. The
- >>strength of the low-level I/O functions is that they offer excellent control,
- >>particularly when used with binary files.
- >
- >??? What can read/write do on a binary file that fread/fwrite cannot do?
-
- Where in the above did I compare them with each other?
-
- >
- >>If you have a special I/O need, you
- >>can use the low-level I/O routines to fashion the exact I/O package to fit
- >>your needs.
- >
- >Same question as above.
-
- Same question back at you: are the read/write functions written using
- fread/fwrite, or is it vice versa? I believe, and feel free to correct me
- if I'm wrong, that the fread/fwrite functions are written using the
- read/write functions.
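-
- Just to show what I have in mind, here's a rough sketch of how an
- fread-style routine might sit on top of read(). This is only my guess at
- the general shape of it, not anybody's actual library source; the names
- my_file, my_fread and BUFSZ are made up, and read() is assumed to be the
- Unix-style call (it lives in <io.h> on most DOS compilers):
-
- #include <stddef.h>
- #include <unistd.h>     /* read(); use <io.h> on DOS compilers */
-
- #define BUFSZ 512
-
- struct my_file {
-     int fd;                    /* low-level file descriptor */
-     unsigned char buf[BUFSZ];  /* the buffer itself */
-     size_t pos, len;           /* next byte to hand out, bytes in buf */
- };
-
- /* Hand out nbytes from the buffer, calling read() only when it runs dry. */
- size_t my_fread(void *dst, size_t nbytes, struct my_file *fp)
- {
-     unsigned char *out = dst;
-     size_t done = 0;
-
-     while (done < nbytes) {
-         if (fp->pos == fp->len) {      /* buffer empty: refill it */
-             long n = read(fp->fd, fp->buf, BUFSZ);
-             if (n <= 0)
-                 break;                 /* EOF or error */
-             fp->pos = 0;
-             fp->len = (size_t)n;
-         }
-         out[done++] = fp->buf[fp->pos++];
-     }
-     return done;
- }
-
- The real thing keeps a lot more state (EOF and error flags, write support,
- ungetc and so on), but that's the basic idea: the expensive read() calls
- happen once per BUFSZ bytes instead of once per byte.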
-
- >
- >>The standard I/O package is one such creation. It is designed to provide fast
- >>buffered I/O, mostly for text situations.
- > ^^^^^^^^^^^^^^^^^^^^^^^^^^
- >???
-
- I use the stdio routines almost exclusively myself. Even for binary files.
- That doesn't make the above statement wrong. Your statements seem to imply
- that all systems are buffered at the OS level, so it doesn't matter what
- I/O approach you use: you're not going to see any difference in I/O speed
- no matter what. While that may be true for your particular situation, it's
- by no means universal. The usefulness of buffering is in working
- sequentially with a file, which makes buffered I/O well suited for text
- files. That doesn't mean you can't or won't read a binary file
- sequentially, or that buffered I/O sucks for binary files. Which, judging
- from what you've written, is exactly what you'll think I meant.
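-
- To make the sequential point concrete: a byte-at-a-time copy like the one
- below asks for one character at a time, but the underlying disk transfers
- still happen one buffer-full at a time. (The file names are made up and
- the error handling is minimal; it's just a sketch.)
-
- #include <stdio.h>
-
- int main(void)
- {
-     FILE *in = fopen("myfile.in", "rb");
-     FILE *out = fopen("myfile.out", "wb");
-     int c;
-
-     if (in == NULL || out == NULL)
-         return 1;
-
-     /* getc/putc work out of the streams' buffers; the real reads and
-        writes happen behind the scenes in buffer-sized chunks */
-     while ((c = getc(in)) != EOF)
-         putc(c, out);
-
-     fclose(out);
-     fclose(in);
-     return 0;
- }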
-
- >>For a lot of applications, using the
- >>standard I/O package is simpler and more effective than using simple low-level
- >>output. The essential feature is its use of automatic buffering. Buffered
- >>I/O means reading and writing data in large chunks from a file to an array
- >>and back. Reading and writing data in large chunks greatly speeds up the I/O
- >>operations, while storing in an array allows access to the individual bytes.
- >>The advantages of this approach should be apparent.
- >
- >Except that you can read and write data in large chunks using stdio
- >routines. The impact of the extra level of buffering is insignificant.
-
- Isn't the above blurb talking about the stdio routines? Once again, your
- reference to an extra level of buffering is not applicable to all systems.
- If you're really referring to the low-level routines, then your statement
- is not always true.
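-
- For the record, that kind of chunked access is just a plain fread into an
- array, followed by ordinary indexing to get at the individual bytes. A
- rough sketch (file name made up, error checks mostly skipped):
-
- #include <stdio.h>
-
- int main(void)
- {
-     unsigned char chunk[512];
-     size_t n, i;
-     long zeros = 0;
-     FILE *fp = fopen("myfile.dat", "rb");
-
-     if (fp == NULL)
-         return 1;
-
-     /* each fread pulls in a whole chunk; examining the individual
-        bytes afterwards is just array indexing, no further I/O */
-     while ((n = fread(chunk, 1, sizeof chunk, fp)) > 0)
-         for (i = 0; i < n; i++)
-             if (chunk[i] == 0)
-                 zeros++;
-
-     printf("%ld zero bytes\n", zeros);
-     fclose(fp);
-     return 0;
- }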
-
- >
- >>When fopen() is used, several things happen. The file, of course, is opened.
- >>Second, an external character array is created to act as a buffer. The
- >><stdio.h> file has this buffer set to a size of 512 bytes.
- >
- >To a default size of _minimum_ 256 bytes. I'm typing this text on a
- >system which uses a default buffer size of 8192 bytes. The size of the
- >buffer can be controlled by the programmer via the setvbuf function.
- >
-
- That's a big default buffer. How many disk accesses does it take to fill
- that thing? I seriously doubt you would want a buffer that big when reading
- a file randomly. But, maybe your system is so fast you can afford that kind
- of waste. In my <stdio.h> file, the constant BUFSIZ is 512. You say that
- you can control the size of the buffer using setvbuf(). That's true, but it
- doesn't change the default buffer size as you seem to imply. If I do the
- following:
-
- #include <stdio.h>
-
- int main(void)
- {
-     FILE *input, *output;                /* error checks omitted for brevity */
-
-     input = fopen("myfile.in", "r+b");
-     setvbuf(input, NULL, _IOFBF, 1024);  /* affects only this one stream */
-     output = fopen("myfile.out", "w+b"); /* still gets the default-size buffer */
-     return 0;
- }
-
- do you think output points to a buffer of the size set by setvbuf? I don't
- think this is what you meant.
-
-
- >>Technically, in DOS, all I/O is buffered since DOS itself uses buffers
- >>for disk I/O.
- >
- >The same applies to other operating systems (e.g. Unix) except that they
- >do a much better job at buffering I/O than MSDOS.
-
- Some versions of UNIX don't do any buffering at all.
-
- >
- >>Hope this clarifies things.
- >
- >So many inaccurate statements don't clarify things. On the contrary.
-
- The shit you have to put up with when trying to be helpful.
-
- John
-
-
-